.<<last label: P1>>
.comment
.turn on "#" 
.GROUP SKIP 12 
.BEGIN CENTER 
.LECTURES ON LISP
.by
.John  Allen
.
.copyright %fc%1 1977 by John Allen
.end
.end of comment;
.ss(In This BYTE)
.select 1;
In this issue of BYTE magazine John Allen extends his argument,
begun
in our March 1979 guest editorial, that LISP is an appropriate
vehicle for personal computation.
Within this issue are [⊗⊗⊗] papers exploring various aspects of LISP: the
language, its  applications, and its implementations. Some topics involve
"reality" --the implementation of LISP-like languages for present 
day microcomputers;
others are "mind-stretchers" --the applications of LISP-like languages for the
specification of  problems; these papers  were written by students and researchers
in AI. Thus we invite
 a dialog between the Artificial
Intelligence community and the small computer user. One
reason for expecting this to be a fruitful venture is that
in the AI community %2all%1 machines are small! Both discussants in this
dialog expect  their "small" machines to perform 
at the limits of the hardware. Further, both groups  frequently use their 
machines in an exploratory fashion, attempting to capture a partially
understood phenomenon in an algorithmic form. This exploratory style
is supported in both  groups by  highly interactive  computing.
Neither AI machines nor personal computers cater to a batch-processing style of
computation. Given these striking similarities 
and the interest of the BYTE readership in things AI, it follows
 that the languages of AI should be of interest and value
to the personal computation audience. The basis for modern AI languages
is LISP. 
.ss(Introduction)
This  paper  introduces  the language  and 
develops a perspective
from which to view the  detailed articles.

LISP is simple and difficult, elegant and ad#hoc; it is a beautiful blend
of foresight and fortuity. LISP is a programming language, often characterized
as a "special purpose list-processing language". But LISP is no more a special
purpose programming language than mathematics is a "special purpose
language for floating-point computations". 
Just as there's more to
mathematics than the accounting and bookkeeping properties present in
"general purpose" programming
 languages, there's much more to LISP
than "just another programming language".


The best description of the LISP programming language is that %3it is  a
high level machine language%1. That is, it shares many of the
facets of contemporary  machine language --the necessity for attention to
detail, the freedom to manipulate the machine's data and programs without
restriction--  yet LISP is "high level" in that  the language contains
the expressive power and convenience of traditional high level languages.
The "contradiction" is resolvable: a LISP machine is just a higher level
machine whose data items are organized differently than the 
binary bit patterns of most machines, and the LISP programming 
language is the "assembly language" for this machine.
.Ss(LISP data structures)
Before introducing the constructs of the language, we
discuss the  data items of the language.
In a traditional language we would  find numeric constants.
In LISP the analogous
 constants are called %2atoms%1. An atom is either a numeral
or a %2literal atom%1 --a string of upper case alpha-numeric 
characters such
that the first character in the string is an alphabetic character.
For example, %3ABC123, 12, %1and %3NIL%1 are atoms, but %31A2%1 and %3(A#B)%1
are not.

LISP also has composite constants called %2lists%1. Lists are built out of
atoms and other lists as follows:
.PT24
.FP
%21.%1 Any atom or list can be an element of a list.
.FP
%22.%1 Given any collection e%41%1,#...#,e%4n%1 of list elements,
then %3(%1#e%41%1#...#e%4n%3 )%1 is also a list.
.pt24
So %3(A B)%1 is a list; as is %3(A B C)%1, and %3(A 1 (ABC 23)) %1.
The last example is a list of three elements;
its third element is also a list#--#of two elements: the atom %3ABC%1
and the numeral %323%1.

Atoms and lists are the basic LISP data structures which will concern
us throughout most of these papers. However,
a robust production version of LISP
includes many more data objects:  arrays, arbitrary precision numbers,
strings, and representations of functions as data objects.
Regardless of the scope of the data representations
 in a specific LISP implementation, it is a  fundamental property that
all data objects are "first class objects", constructible,
testable and available without restriction.   This uniform
behavior  of data
is a property shared by few other languages.
.ss(LISP operations: %3first%1)
We  need some
operations on these data structures. 
Just as we should have  a subtraction operation in arithmetic
machines to decompose numbers, we have LISP instructions to decompose
lists. 
 One such operation is  %3first%1; it
 extracts the first element of a list. For example:
.BEGIN CENTERIT;
←%3first[%3(A B C)%*] %1gives %3A%1
.pt24
.END
This  example is written in LISP's "external syntax" called
 "meta LISP"; it is an instance of prefix notation.
The programming language --the 
"internal notation"-- is called "S-expression LISP" or S-LISP.
Initially,
we will present algorithms  in the M-expression
syntax since it is closer to  traditional  programming notation;
however, since S-LISP is
 our machine language
we will insist on developing facility with that notation.

In a traditional architecture, both instructions and data are stored
in memory. The processor usually has complete
freedom to manipulate any of these objects either as data  or as instructions.
An object accessed by the instruction counter is interpreted as an instruction;
other accesses to items usually imply a data interpretation.
One goal is the representation of LISP instructions as data items in
the LISP machine such that the processing unit of the 
LISP machine will have equal
flexibility in interpreting the encoded information. An object 
may sometimes play the role of program, and sometimes data.

To represent program as data
we must specify a translation of each M-LISP instruction 
into a list representation.
.BEGIN CENTER;

%2External Notation%1
%1<operation>%3[%1 <operand>%41%3;%1 ...%3;%1 <operand>%4n%3 ]%1 
.END

.BEGIN CENTER;

%2List Notation%1
%3( %1<operation>%8T%1 <operand>%41%8T%1 ... <operand>%4n%8T%3 )%1 
.END
.PT24;FP
where the %8T%1 means perform the translation process
recursively.

For this translation to be meaningful  we must also describe 
how the recursion process is to terminate.
.pt24
.BEGIN INDENT 0,2,2;
%21.%1 An "operation" in external notation is something like "%3first%1"
or "+", whereas an "operation%8T%1" must be an atom
or a list. 
We translate the operation name to
an appropriate atom: %3first%1 translates to
%3FIRST%1, + to %3PLUS%1, and * to %3TIMES%1.
.PT24;INDENT 0,2,2;
%22.%1 The operand of %3first[(A B C)]%1 is the constant %3(A B C)%1.
We will translate a  constant %7a%1 to the construct 
%3(QUOTE#%7a%3)%1. For example,  we represent the constant %3(A#B)%1 as
%3(QUOTE#(A#B))%1. This solution is similar to the "quoting" convention
of natural language: Cleveland is a city, but "Cleveland" is a nine-letter
word. The %3QUOTE%1 operator is more than simple pedantry; it
will play a critical role in the "fetch" operation of the LISP machine.
.END
.PT24
To summarize,
our list notation consists of a representation of the operation
 followed by the 
representations of  the operands.
Those operands may themselves specify operations, or may specify
constant operands by using the %3QUOTE%1 operation. For example, 
we  represent 
%3first[(A#B#C)]%1 as %3(FIRST (QUOTE (A B C))) %1and
%3(FIRST#(FIRST#(QUOTE#((A#B)#C))))%1 represents %3first[first[((A#B)#C)]]%1.
.PT24
Values are obtained on a LISP machine  much like one  obtains values from
a  pocket calculator.
We type in an S-LISP
expression, and the calculator displays the result.  The evaluation of an
 expression can be quite involved; if an operand specifies a further
operation, then the current instruction must be suspended while that subsidiary
computation is performed.
So, evaluating ###%3(FIRST (FIRST (QUOTE ((A B) C))))%1###  would involve the following:
.BEGIN indent 2,2,2
The leftmost %3FIRST%1 must wait since its operand requires evaluation;
similarly the next %3FIRST%1 must wait to take care of 
its argument. But its argument is a %3QUOTE%1-ed expression.
#%3QUOTE%1 is kind, requiring no further computation;
it always returns its argument as value. 
Here it returns the list %3((A#B)#C)%1. The inner 
%3FIRST%1 completes now, returning %3(A#B)%1 to the outermost
%3FIRST%1; it is nudged into activity and finally returns %3A%1.
.END 

Consider###%3(FIRST (QUOTE (FIRST (QUOTE (A B)))))%1. Notice that
the  embedded expression %3(FIRST#(QUOTE#(A#B)))%1 has the appearance of a 
LISP instruction.
However, that expression is surrounded by %3(QUOTE#...#)%1;
therefore it is simply a list, i.e., a constant. The final result of the
evaluation will be the atom %3FIRST%1 (since the computation
encodes the M-expression %3first[(FIRST (QUOTE (A B)))]%1).

Since %3QUOTE%1-d expressions appear so frequently,
we  introduce an abbreviation. We write  ####%3(QUOTE#%7a%3)%1 as %9'%7a%1.
%1So the previous example ###%3(FIRST#(QUOTE#(FIRST#(QUOTE#(A#B)))))%1
could be expressed as: ###%3(FIRST#%9'%3(FIRST#(QUOTE#(A#B))))%1;
or as  %3(FIRST#%9'%3(FIRST#%9'%3(A#B)))%1.


.ss(LISP operations: %3rest%1)
We also have an "instruction" named %3REST%1.
You may either think of the
instruction as a machine operation or as the translation of
an M-LISP expression. %3REST%1,
 like %3FIRST%1, expects a list  as its
argument. However, %3REST%1 returns a value 
representing the list with the first element removed.

.BEGIN CENTERIT;group;

←%3(REST %9'%3(A B C)) %1yields %3(B C)
←%3(REST %9'%3(B C)) %1yields %3(C)
.apart;

.END
%1What about: %3(REST %9'%3(C)) %1? 
When we remove the last element from a list
we get the empty list; its representation in LISP is "%3(#)%1".

The operations %3first%1 and %3rest%1 are called %2selector%1
functions since they  are used to select  components from a composite
data object.
.ss(Constructors: %3list%1)
Besides decomposing objects, we must be able  to build new  objects.
The general name for an operation  which builds new objects is a %2constructor%1.
One LISP constructor  is %3LIST%1; here are some examples of its usage.
.PT24
.BEGIN INDENT 0,5,5;
%21.%1 %3(LIST %9'%3A %9'%3B %9'%3C)%1 yields %3(A B C)%1.

.group;
.indent 0,5
%22.%1 %3(LIST 2 %9'%3B)%1 yields %3(2 B)%1.  Note
that we didn't "%3QUOTE%1" the %32%1; the 
machine understands that numbers are constants.
Also, the %3LIST%1 operation will take an arbitrary number of operands;
three in our first example, two in this one, and none in the next.

.indent 0,5,5
%23.%1 %3(LIST )%1 yields %3( )%1.
.END
.pt24
As with the other instructions (except %3QUOTE%1), %3LIST%1  can
handle instructions as operands.
.PT24
.FP
Try: %3(LIST (FIRST (QUOTE (A))) (REST (QUOTE (A B))) (QUOTE C))%1.
.PT24
Diligence may have been rewarded and you  may have responded:
"%3(A#(B)#C)%1". There's an equal probability that you got mired in 
parenthesis-counting and responded "(?!$≡)". One solution is to resort to
M-LISP and  recast the expression as: %3list[first[(A)];rest[(A B)];C]%1

Since  we must develop our S-LISP expertise,
we might 
also use our abbreviation:##%3(LIST (FIRST %9'%3(A)) (REST %9'%3(A B)) %9'%3C)%1.

A more general technique is 
%2pretty-printing%1. 
Pretty-printing exploits 
additional lines and spaces to  highlight the structure in 
complex expressions. For example:

.BEGIN CENTER SELECT 3;TABIT3(8,45,52);
(LIST\(FIRST (QUOTE (A)))\(LIST\(FIRST %9'%3(A)) 
\(REST (QUOTE (A B)))%1#######or%3 \\(REST %9'%3(A B)) 
\(QUOTE C))%1\\%9'%3C)%1
.END
.PT24
.FP		
In a modern LISP implementation
we would find further aids for locating matching parentheses, just as an
interactive Algol-like language should have aids for locating matching
"begin-end" pairs.

.ss(Constructors: %3concat%1)
Another S-LISP operation for building lists is %3CONCAT%1.
It is a two-operand instruction; its first operand can either
be an atom or a list, but its second operand %2must%1  reference a list.
The effect of %3CONCAT%1 is to build a new list
whose first element is the first operand of the %3CONCAT%1 and whose remainder
is the second operand  of %3CONCAT%1.
For example %3(CONCAT##%9'%3A##%9'%3(B))%1##
would evaluate to %3(A#B)%1. 

Note that %3LIST%1 
 takes an arbitrary number of arguments
and builds a list whose elements are those arguments.
On the other hand, %3CONCAT%1 takes 
only two arguments --an element and a  list--  and adds the
element to the front of the list. For example,
.BEGIN tabit2(15,45); SELECT 3;

\(LIST##%9'%3(A)##%9'%3(C)) \%1gives%3 ((A) (C))%1
while\%3(CONCAT##%9'%3(A)##%9'%3(C))\%1gives%3 ((A) C)%1
.END
.PT24
These constructors can be used at any time to compose new data objects.
Now we can decompose lists and make new ones. We can perform evaluation of
simple expressions, much like the facilities of a hand calculator.
Soon we will show how to add new operations to the LISP calculator.
.SS(Recognizers  and predicates)
In traditional assembly language programming we find 
instructions which test for  zero or compare two numbers.
In LISP we manipulate data objects built from
atoms and lists. The "zero" of lists is the empty list, %3(#)%1; 
and so we include
a test for  %3(#)%1.
Since elements of a list can either be atomic or lists themselves
we include a test for "atom-ness", %3atom%1.
Finally, we must be able to distinguish between two non-identical atoms 
using an equality  test.

All LISP operations compute values. The values which our  previous
operations produced were atoms or lists; these new operations
called %2predicates%1 produce "truth values" --true
or false. In M-LISP, we represent true and false as %et%1 and
%ef%1;
however, in S-LISP, these truth values must be represented as
data items and so
we pick the atoms %3T%1 and %3NIL%1 as their representations.

.PT24
.BEGIN INDENT 0,7,5;
%21. %3EQ%1: Compare two atoms. That is, %3EQ%1 is a two-address
instruction which gives value %3T%1 just in the case that
its operands represent the same atom.

.INDENT 0,10,5
%22. %3ATOM%1: This single-address instruction gives %3T%1 if its operand
is an atom, and gives %3NIL%1 otherwise. 

.INDENT 0,10,5;
%23. %3NULL%1: This  single-address instruction gives %3T%1 just in the 
case that its operand is the empty list, %3(#)%1.
.END
.PT24

.BEGIN GROUP;TABIT2(20,56)
For example:\%2S-LISP\M-LISP%3
\(ATOM##%9'%3A) %1gives %3T%3\atom[A] %1gives %et%3
\(ATOM##%9'%3(A)) %1gives %3NIL%3\atom[(A)] %1gives %ef%3
\(EQ##%9'%3A##%9'%3B) %1gives %3NIL%3\eq[A;B] %1gives %ef%3
\(NULL##%9'%3(A B)) %1gives %3NIL%3\null[(A B)] %1gives %ef%3
.END
.pt24
.GROUP;
Since the predicates are value-producing they can be used with the other
list-manipulating operations:
.BEGIN SELECT 3;TABIT2(10,23);group;

\(CONCAT\(ATOM##%9'%3A)
\\(LIST 1##%9'%3A))                 %1gives %3(T 1 A)%1 
.END
.APART

Notice that the %3atom%1 predicate  is of slightly different character than
%3eq%1 and %3null%1. Namely %3atom%1 is performing a "type test" on a 
data structure; such predicates are called %2recognizers%1.
.ss(Conditional expressions)
Clearly, the meaningful use of predicates and recognizers requires
the existence of language constructs to modify the program flow.
Such constructs are called %2control structures%1. One
 basic control unit in LISP is called the %2conditional expression%1.
In M-LISP it is written:
.PT24
.FP
%3[%2<p%41%2>%3 → %2<e%41%2>%3;%2 <p%42%2>%3 → %2<e%42%2>%3;%1
 ... %et%3 → %2<e%4n%2>%3]%1
.PT24
.cond:
The %2meaning%1 of such a conditional expression is as follows:
.BEGIN INDENT 5,5,5
Each %2<p%4i%2>%1 is a predicate; the %2<e%4i%2>%1's are arbitrary LISP
expressions. We evaluate the 
%2<p%4i%2>%1's from left to right, finding the first which gives value
"true". The value of the conditional xpression is the value of the corresponding
%2<e%4i%2>%1.  If none of the
%2<p%4i%2>%1's are true, then the value of the conditional is 
 %2<e%4n%2>%1.
Notice that this last case is really forced upon us since the last
"predicate" is the constant 
%et%1. It is common to read %et%1 
used in this
context as "otherwise".
.END


.GROUP;
.fp
We extend our M-to-S mapping to include this new construct, mapping it to:

.BEGIN TABIT1(11);

%3(COND\(%1<predicate%41%1>%8T%1  <expression%41%1>%8T%3)
\(%1<predicate%42%1>%8T%3  %1<expression%42%1>%8T%3)
\... 
\(%3T%3  %1<expression%4n%1>%8T%3))%1
.END
.apart
.FP;PT24
The evaluation of a conditional expression is different from the
technique we have used in previous LISP instructions. Previously we have
insisted that we evaluate all of the operands in an instruction. In the conditional
expression, we evaluate the minimal part of the conditional
which gives us a true predicate; then we evaluate the corresponding expression.

.GROUP;
For example: %3(COND ((ATOM##%9'%3A)##%9'%3FOO)#(T#1)) %1gives value %3FOO%1 
since %3(ATOM##%9'%3A)%1 gives %3T%1.
%3(COND#((ATOM##%9'%3(A))##%9'%3FOO)#(T#1)) %1gives value %31%1 
since %3(ATOM##%9'%3(A))%1 gives %3NIL%1.

.apart
We have introduced all the instruments in the LISP orchestra. Now it's
time to make some music.

.ss(The factorial function)
.group
Our first example is the  venerable LISP program   
 to compute the factorial function:
.BEGIN GROUP;TABIT2(10,17)

\\%31%1 if %3n%1 is %30%1
\%3n!%1 =\%3n*(n-1)!%1 if %3n%1 %9≠ %30%1

.END
.APART;
.FP
We want to convert this description 
 into a  LISP algorithm.
The "if"-structure can be converted into a  conditional expression,
and we can name the new operation %3fact%1. We
  assume our LISP machine has  a multiplication operation 
named %3times%1;
we also
assume the existence of a 
simple subtract-by-one function, %3sub1%1.

.GROUP;
.fp
Here's the body of a factorial algorithm in M-LISP:
.BEGIN SELECT 3;center;

[eq[n;0] →1; %et%3 → times[n;fact[sub1[n]]]]
.end
.APART;
Notice the occurrence of the function name %3fact%1 in the body;
it is the name of the function we are defining, and somehow
we must associate that name with the body. We  symbolize that
association using "<=". For example:
.BEGIN SELECT 3;center;
fact[n] <= [eq[n;0] →1; %et%3 → times[n;fact[sub1[n]]]]
.end

.GROUP;
.fp
and here's its pretty-printed translation in S-LISP:
.BEGIN GROUP;SELECT 3;TABIT1(30);
.KRK
.fp
(DEF FACT (N) (COND\((EQ N 0) 1)
\(T (TIMES N (FACT (SUB1 N))))))
.END
.APART;
.pt24
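.FP
To see the recursion unwind, here is a sketch of the evaluation of %3(FACT 2)%1:
.BEGIN SELECT 3;GROUP;NOFILL;
.KRK
(FACT 2) = (TIMES 2 (FACT 1))
         = (TIMES 2 (TIMES 1 (FACT 0)))
         = (TIMES 2 (TIMES 1 1))
         = 2
.END
.pt24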
The new ingredient in these definitions is the use of %2recursion%1.
A typical recursive definition  has several characteristics:

.BEGIN INDENT 0,2,2;
%21.%1 The body of the definition should be a conditional expression.
 A definition like %3foo[x]#<=#baz[foo[bar[x]]]%1 will cause nothing
but grief. The conditional expression will contain two basic
parts: the %2termination case%1 and the %2general case(s)%1.
.indent 0,2,2;
%22.%1 The termination case describes what to do when a primitive data structure
is recognized. We
 consider the integers built from zero, using the successor 
function, %3add1%1. Therefore, our termination case in %3FACT%1 involves
recognition of %30%1, and terminates with value %31%1.
.indent 0,2,2
%23.%1 The general cases involve "composite" data structures.
We can
decompose a positive (composite) integer down to zero by  a sequence
 of subtract-by-one operations.
The essential idea is that reducing the complexity of the argument
in a recursive  call will thereby reduce the complexity of the problem.
That's an old trick; what recursion says is that we can solve the
original problem by reducing it to a simpler case of the %2same%1 
problem. If we persist, the problem will solve itself before we get tired
of  reducing; it's like dieting.
.pt24
.END
Recursive definition is similar to inductive description, like those
we gave for defining lists  or the M-to-S mapping.
The  techniques involved in finding the right
inductive steps are similar to those involved in finding the right
decomposition in a recursive definition.
Recursive definition 
is a  powerful descriptive technique; fortunately it can also be implemented
as a very efficient computational mechanism.
.ss(%3equal%1)
For a further example,
 assume that we want to test the equality of
two lists, where equality means that each element of the two lists is identical
and the order in which those elements occur is identical. The identity relation
also extends to sub-elements of lists. For example:

.BEGIN GROUP;TABIT2(10,45)
.krk
\%2equal\ non-equal%1
\%3(A B C)   (A B C)\(A B C)  (A B D)
\%3(A (B C) D)   (A (B C) D)\(A (B C) D)  (A D (B C))
\%3( )  ( )\(A (B (C) D))  (A B C D)
.END
.pt24
.APART;
.fp
Let %3EQUAL%1 be
an algorithm to compute this extended equality; it will
  be recursive.
Regardless of the complexity
of objects, all we need to do is find the
right way to decompose them, and then pounce on the pieces.
The decomposition operators we have for lists are %3FIRST%1 and %3REST%1.
We also have to stop the decomposition. In %3FACT%1 we tested
for the occurrence of zero; in %3EQUAL%1 we  test for the occurrence of
an empty list, and since we are assuming that  elements of a list may either
be sublists or atoms, we need to test for the occurrence of an atom.
Let's try  the simplest possible case first: the empty list.

.BEGIN SELECT 3;GROUP;NOFILL
.PT18
.FP
(DEF EQUAL (X Y) (COND ((NULL X) ...? )
.END
.pt24
.fp
What should we do? If %3x%1 is empty, then we will only have equality
if %3y%1 is also empty; otherwise we will have an inequality.
.BEGIN SELECT 3;GROUP;TABIT3(8,18,40);
.krk
(DEF EQUAL (X Y) 
\(COND \((NULL X) (COND\((NULL Y) T)
\\\(T NIL)))
\\...?)
.END
.fp
Note that we embedded a conditional expression
within a conditional expression. 
 Note also that the interior conditional returns either
%3T%1 or %3NIL%1; but that's what we wanted since %3EQUAL%1 is to encode
a %2predicate%1 and %3T%1 and %3NIL%1 are our representations of the
truth values %et%1 and %ef%1.
Note too that we depend on the order dependence of the conditional evaluation;
we won't test the %3(NULL#Y)%1 expression unless %3(NULL#X)%1 is true.
We won't get to the "%3#...?%1" condition unless %3(NULL#X)%1 is false.

.GROUP;
We can still have
%3x%1 non-empty and %3y%1 empty; let's take care of that:

.BEGIN SELECT 3;GROUP;TABIT3(8,18,40);
.krk
(DEF EQUAL (X Y) 
\(COND (\(NULL  X) (COND\((NULL  Y) T)
\\\(T NIL)))
\\((NULL  Y) NIL)
\\...?)
.END
.APART;

%2Now%1 the "%3...?%1" has been reduced to the case that %2both%1
lists are non-empty, and we can massage the pieces with %3FIRST%1
and %3REST%1.  We look at the %3FIRST%1 pieces; if they're equal, then
our decision on the equality of the original lists depends on the
equality of the remainders (or %3REST%1s) of the  lists. If the
%3FIRST%1s are %2not%1 equal, then we can stop immediately with a 
false indication.  This analysis yields two cases: 1)#if the first
elements are atomic, then use %3EQ%1 to check their equality; 2)#otherwise
use %3EQUAL%1 itself on the first elements. Here we go:


.BEGIN SELECT 3;GROUP;TABS 3,12,34,46,58;TURN ON "\";NOFILL;
.KRK
(DEF EQUAL (X Y) 
\(COND(\(NULL X) (COND\((NULL Y) T) 
\\\(T NIL)))
\\((NULL Y) NIL)
\\((ATOM (FIRST X)) (COND\((ATOM (FIRST Y))
\\\\(COND\((EQ (FIRST X) (FIRST Y)) (EQUAL (REST X) (REST Y)))
\\\\\(T NIL)))
\\\\(T NIL)))
\\((ATOM (FIRST Y)) NIL)
\\((EQUAL (FIRST X) (FIRST Y)) (EQUAL (REST X) (REST Y)))
\\(T NIL)))
.END
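
.FP
As a check on the definition, two sample evaluations (a sketch):
.BEGIN SELECT 3;GROUP;NOFILL;
.KRK
(EQUAL %9'%3(A (B C) D) %9'%3(A (B C) D))  %1gives%3 T
(EQUAL %9'%3(A (B C) D) %9'%3(A D (B C)))  %1gives%3 NIL
.END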

.SS(%3reverse%1)
So  far our examples have either been numerical or have been predicates.
Predicates only require traversing existing lists; certainly  we will
want to write algorithms which build new lists.  Consider the problem
of writing a LISP algorithm to  reverse a list %3x%*. There is a simple informal
computation: take elements from the front of %3x%*  and put them
onto the front of a new list %3y%*. Initially, %3y%* should be  %3(#)%* and
the process should terminate when %3x%* is empty. 
.GROUP;
For example, reversal of the list %3(A B C)%1 would produce the sequence:

.BEGIN SELECT 3; TABIT2(32,49);

\x\y
\(A B C)\( )
\(B C)\(A)
\(C)\(B A)
\( )\(C B A)
.END
.APART
The %3reverse%* function will build the new list 
by  %3concat%*-ing
the elements onto the second argument of %3rev%9'%*%1.

.BEGIN CENTERIT;SELECT 3;GROUP;
.P97:

←reverse[x] <= rev%9'%*[x;( )]

←rev%9'%*[x;y] <= [null[x] → y; %et%3 → rev%9'%*[rest[x];concat[first[x];y]]]
.APART
.END
.fp
 Since %3y%* was initialized
to %3(#)%* we are assured that the resulting construct will be a list.

We leave it to the reader to translate this algorithm into S-LISP;
one possibility is sketched below.
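
.FP
As a sketch of such a translation (writing %3rev%9'%*%1 as the atom %3REV1%1,
since the prime character is not available in atom names):
.BEGIN SELECT 3;GROUP;NOFILL;
.KRK
(DEF REVERSE (X) (REV1 X (QUOTE ( ))))

(DEF REV1 (X Y) (COND ((NULL X) Y)
                      (T (REV1 (REST X) (CONCAT (FIRST X) Y)))))
.END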

.GROUP;
As a final example, here's another algorithm for the factorial function.
You might amuse yourselves by proving the equivalence of the two
formulations.
.BEGIN TABIT1(10);SELECT 3;GROUP;CENTERIT;

←fact%41%*[n] <= fact%9'%*[n;1];

←fact%9'%*[n;m] <= [eq[n;0] → m; %et%3 → fact%9'%*[sub1[n];times[n;m]]]
.END
.APART;
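.FP
As a sketch of how the second formulation runs: %3fact%41%*[3]%1 calls
%3fact%9'%*[3;1]%1, which computes %3fact%9'%*[2;3]%1, then %3fact%9'%*[1;6]%1,
then %3fact%9'%*[0;6]%1, which returns %36%1; the product accumulates in %3m%1.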
.ss(Summary)
Those of you who have heard about LISP programming before know that
LISP's two major characteristics are: lots of parentheses and
strange function names like %3car%1, %3cdr%1, and %3cadadr%1.
By now you should at least understand %3why%1 the parentheses
are used, if not understanding totally why the representation is
a benefit rather than a curse.

LISP's second characteristic is definitely a blemish; more to the
point, it's a commentary on the state of LISP %3programming%1, rather than
the language. When we examine the very low-level representation of
LISP operations, we see that the primitive selection operations
of LISP data structures can be described as selecting either the left or
right branch of a binary graph;  %3car%1
and %3cdr%1 are these selection functions,
and %3cadadr%1 is an abbreviation for a composition of
these operations. Since  all LISP data structures
(in our simple subset, remember) must ultimately be representable
as combinations of atoms and binary graphs, then clearly all algorithms
must ultimately be expressible as manipulations of graph structure
involving 
 %3car%1, %3cdr%1, and a function to
construct new graphs, %3cons%1. Most LISP programs are constructed in just
such a fashion. The result is unsatisfactory from at least two
views. First, the programs become  almost totally unreadable; instead of
couching the data structure abstractly in terms of the 
%3concept%1 --recognizer: %3is_dog[x]%1; selectors: %3left_eye[x]%1, 
%3tail[x]%1,#...;
and constructor(s): %3make_dog[x%41%3;#...#x%4n%3]%1--,
 the programmer performs the transformation %3mentally%1 and gives us
%3eq[cadr[x];DOG]%1, %3cadaddr[x]%1, and %3cons[x;#[[[cons[z;y]#...]%1,
which borders on gibberish. Neither the programmer nor a reader has much
chance of remembering what is going on. 
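
The cure is to name the encoding in one place. As a sketch (the "dog"
representation here is hypothetical), we might define
%3is_dog[x]#<=#eq[cadr[x];DOG]%1 and %3tail[x]#<=#caddr[x]%1 once, and
thereafter write only %3is_dog[x]%1 and %3tail[x]%1 in the program proper.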

An equally serious problem is that this style of programming deeply intertwines
conception and implementation. Given that a new representation of
"dog-ness" is required, the programmer must search out all areas of program
which use the arcane encoding and replace them %3very carefully%1.

Essentially there are two solutions to this problem: require that the
programmer spell out detailed rules for data structuring a la
Pascal. Of course there's no reason to suppose that the programmer's
ability to remain abstract will survive any better here. Indeed since Pascal
really supplies "abstract %3storage%1 structures"  rather than "abstract
%3data%1 structures", along with the requisite verbiage of a typed
language, there are reasons to believe that the programming process
will suffer in the long run. The alternative is to supply the programmers
with an exceptional programming tool and imbue them with the understanding
of abstraction, modularity
and the power of their tool. It may be naive to believe that programmers
can be self-disciplined, but the alternatives are not at all attractive.


We will now explore several reasonably detailed examples,
both as an introduction to the authors and to show LISP in a setting
more concrete than the manipulation of abstract  list structure.
We need to show how to relate abstract ideas
to concrete reality. A recurrent theme in  these examples  is a 
delicate balance between realistic abstraction and overspecification.
.SS(Symbolic mathematics)
When we use a calculator we expect to receive
answers to questions like
 "What's %34+(2*2*3)+9%1?"
Some calculators can even answer questions like
  "What's 
%3x%82%3#+#2xy#+y%82%1, when %3x%1 is %32%1 and %3y%1 is %33%1?"
Questions like "What's %32x+y+3x%1?" are not within their province
 even though a 
person might  respond "%35x+y%1". This is a problem of "simplification"
of algebraic expressions; however,
 a "simplification" in one context
is a "complexification" in another. Is 
%34+y(4+y)%1  simpler than %34+4y+y%82%1  or %3(2+y)%82%1 or %34(1+y)+y%82%1?
We can find circumstances in which any one of these expressions is the 
appropriate "simplification" given that the expression combines with a 
larger expression.

Rather than become involved with these "aesthetic" questions immediately,
we wish to examine a more algorithmic
 computation: differentiation of polynomials.
Differentiation %2is%1 a symbolic operation; it operates on expressions and
produces expressions. 
If you'd rather not read about calculus you may safely skip to another section; 
the remainder of the material  is not  dependent on this section.

Differentiation is expressed  recursively; for example,
the derivative of  a sum is reduced
to the ability to differentiate each summand and form a new expression
from those results:
#d[%3f#+#g%*]/d%3x%*#=#d%3f/%*d%3x#+#%*d%3g%*/d%3x%1, with
similar relationships 
holding for products, differences, and powers.
As usual  in recursive definitions, there must be termination
conditions:
differentiation of a variable, %3x%*, with respect to %3x%* is defined to be 1;
differentiating a constant, or a variable not equal to %3x%* with
respect to %3x%* gives a result of zero.  
More generally, if %3u%* is a constant or a variable then:

.GROUP
.BEGIN TABIT2(20,30);
\d%3u%*/d%3x%* =\1 if %3x = u
\\%10 otherwise
.END
.APART


Though polynomials can be arbitrarily complex,
 their general format is very simple:

.BEGIN INDENT 0,5;group;
%21.%*  Any term is a polynomial.
.krk
%22.%*  If p%41%* and p%42%* are polynomials then the "sum"
of p%41%* and p%42%* is a polynomial.
.krk
%23.%*  Constants and variables are terms.
.krk
%24.%*  If t%41%* and t%42%* are  terms then the "product" 
of t%41%* and t%42%* is a term.
.krk
%25.%*  If t%41%* is a variable and t%42%* is a constant then 
"t%41%* raised to the t%42%*%8th%1 power" is a term.
.krk
%26.%*  If t%41%* is a term then "minus" t%41%*  is a term.
.apart;
.GROUP;
.krk
.END

.APART
.GROUP;

We assume that binary plus, times, and exponentiation are 
symbolized by +, *, and ↑. 
We write all expressions 
in prefix notation like M-LISP, where the operation 
precedes its
operands. For example, we
write %3+[x;2]%* instead of   %3x+2%*.
Here is  a BNF description  of the above set,
expressed in prefix notation:


.BEGIN TABIT1(13);GROUP;
<poly>\::= <term> | +[<poly>;<poly>]
<term>\::= <const> | <var> | *[<term>;<term>] | ↑[<var>;<const>]
\::= -<term>
<const>\::= <numeral>  
<var>\::= <identifier>

.END
.APART;

As we have seen, it is easy to write recursive
algorithms in LISP; but LISP operates on lists and atoms.
We should assure ourselves that we can express polynomials as lists.
We use exactly the scheme we applied to the M-to-S LISP mapping.
.GROUP;
.fp
For example:####%3x%82%* + 2yz + u%1####
will be translated to the  prefix notation:
.fp
%3+[↑[x;2];#+[*[2;*[y;z]];#u]] %1. 
#####From this it's easy to get the list form:
.fp
.begin  center
%3(PLUS (EXPT X 2)  (PLUS (TIMES 2 (TIMES Y Z)) U))
.end
%1
.APART
.pt18;
We will represent the d-operator as a binary  LISP function named %3diff%*.
The application, d%3u%*/d%3x%* will be represented as %3diff[u;x]%*.
Recalling our termination case, we write:
.BEGIN TABIT2(20,30);select 3

diff[u;x] <= [isvar[u] → [samevar[x;u] → 1; %et%* → 0];
\    ?]
.END
.APART
Notice the use of the abstract predicates %3isvar%1 and %3samevar%1.
Before we run this program we must of course supply these components,
but for the development of the algorithm this style is much
clearer; and for modularity of programming  we
 refrain from encoding the representation (i.e. replacing
%3isvar%1 by %3atom%1 and %3samevar%1 by %3eq%1) into the description.
Adding the programming hacks is easy; writing clear abstract algorithms 
requires care.

.GROUP;
We continue, adding the  components for sums and products:
 a sum #%3u#=#f+g%1  would be represented as
%3(PLUS#f%8T%3#g%8T%3)%1.
The result of differentiating  such a %3u%* is  a new list of three
elements:
1.#the symbol %3PLUS%*,
2.#the effect of %3diff%* operating on %3f%8T%1, and
3.#the effect of %3diff%* operating on %3g%8T%1
.APART

The effect is a new "sum" whose summands are the effect of
%3diff%1 operating on the arguments of the original 
sum. Let us assume the existence
of two abstract selectors, %3arg%41%1 and %3arg%42%1.
Then another part of the algorithm is:

.begin tabit2(13,23);select 3;group;
.fp
issum[u] →makesum[\diff [arg%41%*[u]; x];
\\ diff [arg%42%*[u]; x]];
.end

.GROUP;
.FILL
We know that
d[%3f*g%*]/d%3x%* is defined to be %3f%1#*#d%3g%*/d%3x%1#+#%3g%1#*#d%3f%*/d%3x%1.
So:
.begin tabit3(16,26,38);select 3;group
.fp
 isprod[u] → makesum[\makeprod[\arg%41%*[u];
\\\diff[arg%42%*[u]; x]];
\\makeprod[\arg%42%*[u];
\\\diff[arg%41%*[u]; x]]];
.end

Finally, here is the completed S-LISP algorithm for sums and products.
.BEGIN nofill;turn on "\";TABs 3,12,37,42,44,60;select 3;
.GROUP
.krk
.fp
(DEF DIFF (U X)
\ (COND \((ISVAR U) (COND (\(SAMEVAR X U) 1)
\\\(T 0)))
\\((ISCONST U) 0)
\\((ISSUM U)(MAKESUM\(DIFF (ARG1 U) X)
\\\\(DIFF (ARG2 U) X)))
\\((ISPROD U) (MAKESUM \(MAKEPROD\(ARG1 U)
\\\\\\(DIFF (ARG2 U) X))
\\\\\(MAKEPROD\(ARG2 U)
\\\\\\(DIFF (ARG1 U) X))))
\\(T (ERROR ))))
.APART
.END
.select 1
.fp
And here are some of the components to relate our abstract algorithm to the
current representation:
.BEGIN CENTERIT; SELECT 3;group;tabit2(15,45)
issum[x] <= eq[first[x];PLUS]\makesum[x;y] <= list[PLUS;x;y]

isconst[x] <= numberp[x]\samevar[x;y] <= eq[x;y]
.END
.BEGIN TABIT3(20,26,31);

.GROUP
Try:##%3diff [(PLUS (TIMES X Y) X); X]
.pt18
\=list [PLUS; diff[(TIMES X Y); X]; diff [X;X]]
\=list [PLUS; 
\\list [PLUS; 
\\\list [TIMES; X; diff [Y;X]];
\\\list [TIMES; Y; diff [X;X]]];
\\diff [X;X]]
.apart
.group

\=list [PLUS;
\\list [PLUS;
\\\list [TIMES; X ;0];
\\\list [TIMES; Y;1]];
\\1 ]
.apart
.group
\=(PLUS  (PLUS (TIMES X 0) (TIMES Y 1)) 1)

.fp
%1
which can be reinterpreted as:%3####x*0 + y*1 + 1 .
%1
.END

It is clear that we have the correct answer; it is equally clear that
there are
simplifications which should be done before anyone would
find it acceptable. 

This example is a
particularly simple problem for algebraic simplification. We can easily
write a LISP program to perform simplifications like those expected here:
replacing %30*x%1 by %30%*, and %3x*1%* by %3x%*. 
A  branch of computer science is involved with general problems of
symbolic and algebraic manipulation.
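
.FP
As a sketch of such a simplifying constructor (the name %3SIMPTIMES%1 is our
invention), the "multiplication" case might read:
.BEGIN SELECT 3;GROUP;NOFILL;
.KRK
(DEF SIMPTIMES (X Y) (COND ((EQ X 0) 0)
                           ((EQ Y 0) 0)
                           ((EQ X 1) Y)
                           ((EQ Y 1) X)
                           (T (LIST (QUOTE TIMES) X Y))))
.END
.FP
Used in place of %3MAKEPROD%1, it removes the %3x*0%1 and %3y*1%1 redundancies
from the answer above; a companion %3SIMPPLUS%1 would handle the sums.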
A particularly nice micro-version of 
one such system, MACSYMA, 
has been built by a group at the University of Hawaii. They have
contributed a  paper examining several symbolic mathematics
systems. See ****
.SS(Tree Searching)
A ubiquitous feature of sophisticated game playing is "a strategy".
In simple gamesmanship, e.g. tic-tac-toe, the strategy may be quite simple,
indeed easily computable.
In more complex games like checkers and chess, the algorithmic approach is in for
tough sledding. The heart of much of this strategy formation is often
a tree structure.  That tree will have nodes representing "possible moves".
In a single-person game, the  evaluation of the tree will result in
a "best move"; any move that wins. In a two-person game we must be more careful;
the branching structure will represent %2both%1 our moves and those
of our opponent, and the position evaluation must
take that into account: "Now if I move here, then  he'll do that,#..."
 We will consider both kinds of tree evaluations, presenting
the simpler ones first.

Assume that any node in a tree can have any finite
number of branches, and   that the
trees will terminate on all of their branches;
we also assume the existence
of a recognizer
 named %3is_term%1, which will return %et%1 if the tree is the trivial
terminal tree  with no branches. A terminal tree may either be a %3WIN%1 or a 
%3LOSS%1. If it's a win, we know how to achieve our goal; if it's a 
%3LOSS%1, then we  look further. That "further" says examine the alternatives
of the immediately preceding node; if there aren't any alternatives
then back up to %2its%1 predecessor.
If a tree has branches we will select them with the selector %3branches%1.

.BEGIN GROUP;SELECT 3;TABIT2(19,23);
eval_tree[tr] <= [\is_term[tr] → [is_win[tr] → tr; %et%3 → %3LOSS%1]
\%et%3 → eval_branches[branches[tr]]]

eval_branches[l] <= [\null[l] → LOSS;
\\eq[LOSS;eval_tree[first[l]]] → eval_branches[rest[l]];
\\%et%3 → first[l]]
.END
We used a few undefined LISP functions, but their definitions should be clear
from context. What should be equally clear is the structure of the algorithm.
No reams of code, only two simple recursive definitions. 
Translate the algorithms to list notation; supply
the  missing selectors and recognizers (no constructors here), and
 the programs will run.
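
.FP
As a sketch of the translation (writing %3is_term%1 as the atom %3ISTERM%1, and
so on, and assuming %3WIN%1 and %3LOSS%1 are atoms):
.BEGIN SELECT 3;GROUP;NOFILL;
.KRK
(DEF EVALTREE (TR) (COND ((ISTERM TR) (COND ((ISWIN TR) TR)
                                            (T (QUOTE LOSS))))
                         (T (EVALBRANCHES (BRANCHES TR)))))

(DEF EVALBRANCHES (L)
     (COND ((NULL L) (QUOTE LOSS))
           ((EQ (QUOTE LOSS) (EVALTREE (FIRST L))) (EVALBRANCHES (REST L)))
           (T (FIRST L))))
.END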

Now we consider two-person games; 
this class of game includes checkers and chess. 
In the following discussions we will make several assumptions.
.BEGIN INDENT 0,2,2;GROUP;
%21.%1 Our opponent is as smart as we are. Obviously false, but
this assumption will  be reflected  by using  our evaluation function
in evaluating the positions of our opponent.
.indent 0,2,2
%22.%1 We 
assume that our opponent is also trying to win. Therefore his move will
reflect his attempt to do us maximal harm, trying to mix up our
winning strategy. Since we are using the same position-evaluator,
his "maximal harm" is our "minimal good".
We therefore follow a "max-min" strategy 
 wherein we attempt to find our best move
which our opponent cannot turn into    a disaster  for us. 
.END

From these ideas we formulate our position evaluation strategy as follows:
.BEGIN INDENT 0,2,2;
%21.%1 Grow a tree of moves. First our possible moves from a position, then
his counter moves; then our responses,#... Continue this until the
branch terminates or until we get tired.
.indent 0,2,2
%22.%1 Once the tree is built, we evaluate the terminal nodes.
.indent 0,2,2
%23.%1 The values are propagated back up the tree using the min-max idea.
If the preceding node is ours, we assign that node the maximum of the
branch values; if the preceding node is his we assign the minimum of the
values. We proceed in this fashion, finally returning the value of the
"best path".
.END

We will simplify matters somewhat, returning only the "value" of the
best path (leaving the player in a state of fury since we won't tell
him how we found that path).
 First we develop some  subfunctions:

.BEGIN group;select 3;turn off "∞";tabit1(43);
maxlist[l;f] <= [null[l] → -∞; %et%3 → max[\f[first[l]];
\maxlist[rest[l];f]]]

minlist[l;f] <= [null[l] →  ∞; %et%3 →  min[\f[first[l]];
\minlist[rest[l];f]]]
.END
Here the "%3∞%1" means "giant number", bigger than any other value
our evaluation function %3f%1 can concoct. Note also that the use of %3f%1
is new.
It is used as a LISP function within the bodies of the definitions, yet
it is passed in as a parameter. It is used as a %2functional argument%1.
Even though we write abstractly, we
know that the arguments to LISP functions must be lists. Fortunately, LISP
supports a representation of functions as data objects, thereby
allowing  functions  as  
parameters and values.
Here is a simple  example:
.BEGIN CENTER;SELECT 3;
maxlist[(1 3 5 2);add1] = 6 %1and%3  minlist[(1 3 5 2);add1] = 2.
.END
With those preliminaries, here's min-max.

.BEGIN SELECT 3; GROUP;
maxpos[p] <= [is_term[p] → value[p]; %et%3 → maxlist[branches[p]; minpos]]

minpos[p] <= [is_term[p] → value[p]; %et%3 → minlist[branches[p]; maxpos]]
.END
.pt24
.fp
%3maxpos%1 gives the  value of a position for the maximizing player
and %3minpos%1 gives the  value of a position for the minimizing player.
##%3value%1 is the terminal position evaluation function. 

What's even more interesting is that there is a simple technique which
will allow us to discover the optimal path  usually without having to
visit all the nodes. The technique, discovered by John McCarthy in 1958,
is called  %7a%2-%7b%2#pruning%1; it is based on the observation that
if  our opponent is assured that he can force us into an unfavorable position
then he won't make a move which would give us a %2better%1 position.
The insight: the player can
often make such decisions
on the basis of only a partial evaluation of the tree.
Consider:
.BEGIN SELECT g;NOFILL;PREFACE 0 MILLS;GROUP;
.TURN OFF "%","#";
.TURN ON "∞" FOR "%";
             O
      ⊂αααααα∀ααααα⊃	opponent∞1'∞gs moves
      N            M
  ⊂αααβααα⊃    ⊂αααβαα ...⊃    our moves
  7   3   4    ?    ... 
.END
.fp
Since we are to evaluate the position at %gN%1, we maximize the  position,
getting %g7%1; that becomes the value of node %gN%1. It is up to our
opponent to evaluate position %gO%1, and he now knows we're going to
get a %g7%1 if he moves to %gN%1. He looks questioningly at "?"; if that
value is greater than %g7%1 then he immediately rejects move %gM%1 without
examining the other possibilities; things can only get worse for him. 
If "?" is less than
%g7%1, then he looks at additional alternatives at %gM%1. Once our
opponent is finished evaluating the position,  then it's our turn
to play the game at the position above %gO%1, only now we will try to maximize
what our opponent has left us.
We let %7a%1 be the value which must be exceeded  for a position to be
desirable by the participant about to play; and let %7b%1 be  the value which must
%2not%1 be exceeded if the move leading to the position would be made by the
opponent; in the above example %g7%1 is the %7b%1-value when evaluating
%gM%1. Let's modify the min-max algorithms to include %7a%1-%7b%1
pruning.
.BEGIN group;select 3;turn off "∞";tabit2(26,41);

maxlist%4αβ%3[l;f;%7a%3;%7b%3] <= [\null[l] → %7a%3; 
\f[first[l]] %9≥%1 %7b%3 → %7b%3;
\%et%3 → maxlist%4αβ%3[\rest[l];
\\f;
\\max[%7a%3;f[first[l]]];
\\%7b%3]]
.end

.BEGIN group;select 3;turn off "∞";tabit2(26,41);
minlist%4αβ%3[l;f;%7a%3;%7b%3] <= [\null[l] → %7b%3; 
\f[first[l]] %9≤%1 %7a%3 → %7a%3;
\%et%3 → minlist%4αβ%3[\rest[l];
\\f;
\\%7a%3; 
\\min[%7b%3;f[first[l]]]]]
.END
.BEGIN group;select 3;turn off "∞";tabit1(25);

maxpos%4αβ%3[p;%7a%3;%7b%3] <= [\is_term[p] → max[%7a%3;min[%7b%3;value[p]]]; 
\%et%3 → maxlist%4αβ%3[branches[p]; minpos%41%3;%7a%3;%7b%3]]

minpos%41%3[x] <= minpos%4αβ%3[x;%7a%3;%7b%3]

minpos%4αβ%3[p;%7a%3;%7b%3] <= [\is_term[p] → max[%7a%3;min[%7b%3;value[p]]]; 
\%et%3 → minlist%4αβ%3[branches[p]; maxpos%41%3;%7a%3;%7b%3]]

maxpos%41%3[x] <= maxpos%4αβ%3[x;%7a%3;%7b%3]


.END
The process can be initialized with %7a%1 and %7b%1 set to %3-∞%1 and %3∞%1
respectively. Tighter bounds on "acceptability" can be enforced by
picking different %7a%1's and %7b%1's. The effect will be to shorten the
search time, perhaps ignoring some exceptional moves; %3caveat emptor%1.

This is %2not%1 a trivial algorithm. However its  description as a LISP 
program is about as simple and as compact as you will find anywhere.



The introduction of functions as  full-fledged data objects in programming
languages dates to LISP in 1960; few languages have supported functional
data as well. Their implementation is as subtle as their power in
representing problems. 
In his paper,
%3Functions as Objects%1, Phil Agre discusses both aspects of their
behavior.
.SS(Simple computers)
Though the execution  of LISP programs may seem foreign to 
the architecture of a micro computer, in fact a LISP architecture
is really quite conventional; it is simply an "unconventional"
perspective on a conventional machine.

We  begin by looking at the macroscopic structure of a contemporary
machine.  Its basic item of information is called a %2word%1.
Each such word contains the same number of bits.  
Words are not simply  scattered about like loose
paper on a desk top; rather, they are arranged neatly as a %2vector%1
like the pages in a book. Each word is assigned  a "page number" called
its %2address%1; and the "next-to" relation is just "add-one".
LISP memory, on the other hand, is organized as a tree with two
"next to" relations (%3first%1 and %3rest%1)
now encoded in the data items themselves.

Contemporary machines gain their power from  the
 elegant idea 
of a "stored program" wherein
both instructions and data are stored in the machine.
The machine must also  have a way of selecting which instructions 
to execute. In
some older machines  the instructions included an explicit field specifying 
which address contained the next instruction. Most modern computers
have replaced that generality with an implicit incrementation to the next
location unless otherwise directed. 
The basic cycle of a contemporary machine is something like:
.GROUP;
.BEGIN TABIT2(27,30);GROUP;TURN OFF "←";

\%5l:%3\C(%2IR%3) ← C(C(%2PC%3))
\\C(%2PC%3) ← C(%2PC%3) + 1
\\ %5execute %3C(%2IR%3)
\\%5go to l
.END
.APART
.FP
%3C(x)%1 is read "contents of %3x%1"; the arrow "←" is read "replaced by".
The %2IR%*, or %2Instruction register%1, is an internal register 
used to hold the instruction we are to execute. 
The  locus of control is maintained
by
the %2program counter%1 (%2PC%1).
If %2PC%1 references
a simple data handling instruction the machine executes the operation
and then prepares to execute the next instruction in the sequence. 
Control operations --those which modify program flow, like jumps, 
conditional branches, or subroutine calls-- are performed by changing the 
contents of the %2PC%1.
Soon we will demonstrate a similar "execution cycle" for a LISP machine, except
there the "program counter" follows the tree-structure of the program.

The duality of program and data which LISP enjoys is also present on
contemporary hardware.
Most machines assume that 
any  word can contain an instruction or data,
and
  the interpretation of a word
depends on the way the  machine accesses the word.
The contents of a location can even be used both as an instruction
and as data; if location %3101%1 contained the representation of the
instruction %3ADD#101%1, then  the execution cycle for that instruction would
involve location %3101%1 as both instruction %2and%1 data.
This dual role of instruction and data occurs in less ad hoc situations.
A  loader receives the "data" form of instructions
from the  assembler,
acts on them as data items, and loads them into memory;
it does not execute them. 
In terms of a contemporary machine this means:
.BEGIN GROUP;INDENT 5,5,5;
.pt24
If a location is referenced by the %2PC%1 then its contents is decoded
as an instruction. If a location is referenced as an operand then its
contents is taken as data.
.pt24
.END
.fp
Translated to LISP terms it  becomes:

.begin indent 5,5,5;group;
We have a processing unit (named %3eval%1) which accesses LISP memory.
If a memory element (list) is accessed in the instruction fetch,
then that element is decoded as a LISP instruction; otherwise
the item is accepted as data.
.end
.pt24
We know, from 
the nature of LISP operations, that this processing operation can become quite
complex, even recursive; but the basic machine cycle is quite traditional.
As an abstract
recursive process  the evaluation of expressions involves
  termination conditions:
 the recursive evaluation of an expression terminates
when its value is a constant; "the value of 1 is 1"; the value of
"%3(A#B#C)%1 is %3(A#B#C)%1. However, what's %2inside%1 the LISP machine
is the %2representation%1 of %3(A#B#C)%1, namely %3(QUOTE#(A#B#C))%1,
so our processor also needs to watch for %3QUOTE%1 to stop its operand fetch.
This is but  a slight generalization of the
%2indirect address%1 mechanism of most contemporary machines!
 If an address is
fetched indirectly, it means that the contents of the address
are not used  as data, but are interpreted as a further address
and the contents of that further address are examined. If the machine
allows  arbitrarily deep indirect addressing, that further address
is examined to see if %3it%1 specifies additional indirection.
This chain of indirect addressing must terminate with a "real" address
if the instruction which instigated this search is ever to complete
its execution.

Conceptually, a LISP machine is a very conservative extension of
traditional Von#Neumann architecture. Its data is more general;
its processing unit is more general; but its stored program concept,
program/data duality, and instruction fetch are only slightly
different. That slight difference in conception makes a world
of difference in power.

We now  give an informal description of LISP evaluation, and following that,
a more precise LISP description.
Let %3exp%1 be a representation of an expression, and let %3env%1
represent a symbol table  or environment of names and values:
.pt24
%21.%1 If %3exp%1 represents a constant then LISP gives that represented value.
.pt18
%22.%1 If %3exp%1 is (represents) a variable, find its value in the environment.
.pt18
%23.%1 If %3exp%1 is a conditional expression, evaluate it using the semantics
of conditional expressions (see {yon(cond)}).
.pt18
%24.%1 If %3exp%1 is a function call, evaluate its arguments using this algorithm,
and then apply those arguments to the function.
.pt24;fp
Actually, LISP has several styles of function call; this one is %2call-by-value%1.
In %3Functions as Objects%1 Phil Agre discusses these generalized notions and
their implementations.

.BEGIN NOFILL;TURN ON "\";KRK;TABS 12,40,41,42,48
.group
.SELECT 3;

eval[exp;env] <= 
\[is_const[exp] → denote[exp];
\ is_var[exp] → lookup[exp;env];
\ is_cond[exp] → eval_cond[exp;env];
\ is_expr_call[exp] → apply[\funct[exp];
\\\evargs[args[exp];env];
\\\env]]
.END

where:

.begin select 3; nofill;turn on "\";krk; tabs 8,28;GROUP;

evargs[params;env] <=
\[empty_form[params] → mk_empty_act[];
\ %et%3 → mk_arg_lst[\eval[first_form[params];env];
\\evargs[rest_form[params];env]]]
.end
Though the obvious representation of %3params%1  is a list, we have
steadfastly kept the representation suppressed. To keep your perspective,
here's a natural map from abstraction to the concrete:

.begin tabit2(20,40);group;

\%2abstract%1\%2concrete%3
\empty_form\null
\mk_empty_act\()
\mk_arg_lst\concat
\first_form\first
\rest_form\rest
.end
.fp
This abstraction would be very helpful if we decided to represent actual and
formal parameter objects as, say, small arrays rather than lists.

.begin select 3; nofill;turn on "\";krk; tabs 8;

eval_cond[cond;env] <=
\[empty[cond] → err[];
\ eval[p_sub_i[head[cond]];env] → eval[e_sub_i[head[cond]];env];
\ %et%3 → eval_cond[tail[cond];env]]
.end
.fp
In the above algorithm %3head%1 selects the first %2<p%4i%2>%1-%2<e%4i%2>%1
pair; %3p_sub_i%1 selects (the representation of) the predicate position, and
%3e_sub_i%1 the expression position.
This algorithm encodes conditional expression evaluation 
in a  somewhat incestuous fashion, using conditional expressions itself. In
an implementation %3eval_cond%1 would be encoded at a lower level, but
for expressive purposes this level of discussion is much more satisfactory.

Function application involves at least two cases: if the function was
defined using %3DEF%1 then we 
locate our freshly evaluated arguments, associating them with the formal
parameters of the procedure, and then evaluate the body of the function
in this new environment. If the function application involves a primitive
function, then we expect the primitive to do its job.

.BEGIN NOFILL;TURN ON "\";KRK;TABS 17,39,48
.SELECT 3;group
apply[fun;vals;env] <=
\[is_proc[fun] → eval[\body[fun];
\\makenv[\makeblock[vars[fun];vals];
\\\env]];
\is_prim[fun] → funcall[fun;vals;env]]

.END
We think of environments as consisting of blocks. A block
is created whenever we enter a function (see %3apply%1);
usually it goes away when we leave the function. The interrelationships
between the objects and algorithms in environment manipulations are some of the
most thorny in programming language design. They involve 
a discussion of 
at least "scoping rules":
%3when%1 does a variable assume a value; "binding strategies": %3how%1 does a
variable assume a value;  and "functional objects": %3what%1 is an implementation
of functional variables. Such a discussion is too detailed for our purposes.
We  need only give a flavor of what variable retrieval entails; a simple
recursive description will suffice.

.BEGIN NOFILL;TURN ON "\";KRK;TABS 22
.SELECT 3;group
lookup[x;env] <= [\empty[env] → err[ ];
\find[x;block[env]] → value[x;block[env]];
\%et%3 → lookup[x;prev[env]]]

.END
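.FP
As a sketch of one hypothetical representation: let an environment be a list of
blocks, a block a list of name-value pairs, %3block%1 select the first block,
%3prev%1 the remaining ones, and %3name%1 and %3val%1 select the parts of a
pair; then:
.BEGIN SELECT 3;GROUP;NOFILL;
.KRK
find[x;bl] <= [null[bl] → %ef%3; eq[x;name[first[bl]]] → %et%3; %et%3 → find[x;rest[bl]]]

value[x;bl] <= [eq[x;name[first[bl]]] → val[first[bl]]; %et%3 → value[x;rest[bl]]]
.END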

mumble greussay, prini



This "conceptual LISP machine" defines an interpreter;
it should be understandable even though
we have described  it at a very high level of abstraction; indeed using
the very concepts and formalism which we are trying to capture in the machine.
This style of definition is called "meta-circular"; it is an elegant
technique for describing a programming language. Of course, when
we move to "real" implementation the mechanisms like recursion, parameter
passing, and symbol table maintenance must be encoded in the host
processor. It is  astonishing to discover how little of that encoding
must be machine dependent. This is a key ingredient to "portability". 
The argument goes as follows. Given a LISP system,
including a compiler, on machine %dM%41%1, modify the compiler to produce
code for machine %dM%42%1. 
Assuming that the compiler was written in a clean,
well-structured fashion, it should be reasonably simple to replace the
%dM%41%1 code generators  with generators for machine %dM%42%1;
this task is simplified immensely since all LISP compilers are 
written in LISP.
Take all pieces of the LISP system which are expressible in LISP and
pass them through the modified compiler (running on machine %dM%41%1).
The output from that process will be code which will run on %dM%42%1.


****lil, and griss(?)


LISP constructors, like
%3concat%1 (%3cons%1) and %3list%1, bring new objects into existence.
Those objects must be manufactured; there is a non-trivial problem
in LISP storage management, and if this is to be a "production version of LISP",
much care needs to be taken in supplying a full complement of data types
along with their storage management techniques. 


***garbage collectors  ---prini-greussay


More mundane problems like input and output
must also be solved; the large bulk of even %3these%1 algorithms can be expressed 
in LISP. The basic components of a language's input processing can be 
categorized as a scanner and a parser.

.BEGIN INDENT 0,7;
.group
%3ratom[ ]%* is  the LISP
%2scanner%1. It reads the input string, constructing
the next atom
or special character (left parenthesis or right parenthesis).
It looks up that object in the atom table and returns
a pointer to that  table entry.  If no entry
is found an appropriate entry is made.
%3ratom%*  flushes spaces and commas,
recognizing  them as delimiters. It returns only atoms or special characters
to %3read%*. 
.END

To  simplify matters, we need to  refer to atoms which will print
the characters "%3)%*", "%3(%*", and " "#(blank).  
We will assume that %3RPAR%*, %3LPAR%*,
 and %3BLANK%* denote such atoms.  For example, if the next input
character is "(" then
####%3eq[ratom[ ];LPAR]%*### is true (and the input pointer is moved to
the next character).

.BEGIN TABIT2(17,21);TURN OFF"←";SELECT 3;
.BEGIN FILL;select 1;
%3read%* is the LISP %2parser%1, building the internal list-structure
from the lower level components it receives from the scanner. 

.END

.GROUP
.BEGIN TABIT2(11,15);TURN OFF"←";SELECT 3;
%3read[] <= read%41%3[ratom[ ]]

read%41%3[j] <=\[atom[j] → j;
\\ is_lpar[j] → read_rest[ ];
\\ %et%3 → err[ ]]
.END
.begin select 1; fill
The call on
%3err%1  will terminate the input scanning
immediately.
.end
.APART
.GROUP;
.BEGIN FILL;SELECT 1;
%3read_rest%* will translate strings %9α%*  acceptable in the context "%3(%9α%1".
If it sees: 
.END
.BEGIN NOFILL;SELECT 1;
\an atom, then %9α%1 is <atom>%9β%3)%1;
\a left parenthesis, then %9α%1 is %3(%9β%3)%9∂%3)%1;
\a right parenthesis, then %9α%1 is %3)%1.
.END
.begin fill;select 1;
Thus %9α%* being %3"A)"%* or %3"A#(B#C))"%* would be suitable for %3read_rest%*,
since %3(A)%* and %3(A#(B#C))%* are  lists.
.end
.APART
.GROUP

.BEGIN TABIT1(19);TURN OFF"←";SELECT 3;
%3read_rest[ ] <= read_rest%41%3[ratom[ ]]

read_rest%41%3[j] <=\[atom[j] → concat[j;read_rest[]];
\ is_lpar[j] → concat[read_rest[ ];read_rest[ ]];
\ is_rpar[j] → NIL;
\ %et%3 → err[]]
.END
.begin select 1;fill
Note that the %3is_lpar%1 case relies on the arguments to %3concat%1 being
evaluated from left to right: the first call on %3read_rest%1 reads the
embedded sublist, and the second reads the remainder of the enclosing list.
.end
.APART
.END
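
.begin fill;select 1;
A hand trace may make the cooperation of %3read%1 and %3read_rest%1
concrete. Suppose the input stream contains "%3(A#(B#C))%1": %3ratom%1
first returns %3LPAR%1, so %3read%1 calls %3read_rest%1; %3read_rest%1
collects %3A%1, recurs when the second %3LPAR%1 appears (building the
sublist %3(B#C)%1), and returns %3NIL%1 at each %3RPAR%1. The value
delivered to %3read%1 is the list %3(A#(B#C))%1.
.end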


Though 
much of %3ratom%1 can also be expressed in LISP, we will only sketch
informally one of its interesting properties in dealing with atoms.
 Atoms in LISP aren't
really atomic. Rather, they act like entries in a natural language
dictionary; each literal atom has an entry in LISP's dictionary.
As in natural language, each entry may have several meanings stored with it.
These alternative meanings will all
be listed with that word and usually organized as a list of pairs --parts of
speech and corresponding meaning. LISP's dictionary  organization is similar:
each atom entry has an associated list called a %2property list%1,
or a %2p-list%1,  containing
all the information currently available about that atom.
One of the responsibilities of %3ratom%1 is to assure that every instance
of an atom refers to the unique p-list for that atom.
The p-list is organized as %2attribute-value pairs%1, where an attribute
might be an indication that the associated atom names a function and the
value part would describe the definition. 
Property lists 
are a generalized form of record structure since the number of
entries may grow and shrink during the computation. Property lists 
are a powerful tool for organizing data base information; one can
organize information as interconnected networks of atoms, attributes, and
values, and then construct search and retrieval algorithms in LISP.
In his paper %3 Pattern-directed Invocation Languages%1, Robert
Kornfeld describes some applications of AI research to intelligent
data base manipulation.
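
.begin fill;select 1;
As a small illustration of p-list retrieval, consider a function which
finds the value stored under a given attribute. The representation
assumed here --a p-list of alternating attributes and values, reached
through a selector %3plist%1-- and the name %3get%1 are assumptions of
this sketch; implementations differ on the details.
.end
.BEGIN NOFILL;TURN ON "\";KRK;TABS 22
.SELECT 3;group
get[x;a] <= get%41%3[plist[x];a]

get%41%3[l;a] <= [\empty[l] → NIL;
\eq[first[l];a] → first[rest[l]];
\%et%3 → get%41%3[rest[rest[l]];a]]
.END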


Traditionally all the implementation problems --storage management,
variable binding, p-list manipulation, etc.-- have
been dealt with in software.
An exciting alternative is to build LISP machines in hardware, thereby
"raising the programming floor" to a much more acceptable machine
level than has been previously available. Several very healthy projects
exist, from re-microcoded machines, through specially constructed hardware,
to experiments with VLSI LISP chips. 
Several of these efforts will be documented in the October
issue of the IEEE Transactions on Computers. It is clear that LISP is only
beginning to have an impact on the computing community. 


.next page

.begin nofill

The initial article is an  intensive LISP tutorial covering the  language,
the programming style,  and some of  the background details  for the  more
detailed articles.


The first group of papers represents several diverse applications of
LISP-like ideas.

The paper of David Stoutemyer, a professor of computer science at the
University of Hawaii, discusses the realm of symbolic mathematics. Of
particular interest to the BYTE readership is his description of muMATH,
an elegant symbolic mathematics system implemented in a LISP dialect and
running on Z80/8080 based microcomputers.

Richard Weyhrauch, a Research Associate at the Stanford AI Laboratory, and
Henson Graves, a professor  of Mathematics at  San Jose State  University,
apply LISP to Boolean logic.  Their article uses combinatorial circuits to
illustrate   some   programming    problems   whose   solutions    exploit
characteristic features of  LISP:  recursive data  structures, lists,  and
the ability to manipulate programs as data structures.


Robert Kornfeld, a graduate student at  MIT, shows an application of  LISP
ideas in the artificial intelligence domain. Pattern-directed invocation
is a powerful technique  for representing and  manipulating facts in  data
bases.  The implementation of these ideas involves two facets of LISP: the
generalized record structures, called property  lists; and the ability  to
store procedures as data structures.


The preceding papers have applied several of the unique features of LISP
to the solution of non-trivial problems. In the next paper the MIT
mathematician and computer scientist, Vaughan Pratt, views languages from
a more distant perspective. He shows that those features which we found
so attractive in the preceding papers are instances of general principles
which a programming language must abide by if generality and
expressibility are not to be compromised.

Phil Agre's paper, "Functions as Data Objects", represents a bridge
between the philosophical ideas expressed by Pratt and the
implementor's dilemma of supporting generality without sacrificing
efficiency. Phil is an undergraduate mathematics major at the University
of Maryland, and will be beginning his graduate work in computer science
at MIT this fall.


We now move further into implementation ideas; Jerome Chailloux
discusses the details of VLISP on Z80/8080 microprocessors. Jerome's work
at the University of Paris (Vincennes) is an elegant blend between the
constraints of current microprocessors and the desire for a non-toy LISP
implementation. It is a comfort to discover that our high expectations
can begin to be satisfied on traditional micros.

The next paper, by Gianfranco Prini and Martin Rudalics, discusses storage
management. Memory appears infinite to a LISP user; when new storage is
required it magically appears out of the "ether". Since memory technology
hasn't quite solved this problem, LISP systems contain a storage
reclamation package whose responsibility it is to scavenge new storage
from discarded computation.

Finally, Joachim Laubsch and his colleagues summarize the spirit of these
papers and the March editorial. They discuss the evolving computer
culture and argue that the basic ideas embodied in AI-like systems
represent the beginnings for intelligent use of personal computer systems.
Further, they argue that the basic concepts and approach to computation
which LISP represents offer significant advantages within the
contemporary educational framework.

These papers cover only a small sample of the LISP culture; however, they
are representative of the creativity which LISP spawns. The concern which
initiated my March editorial, and which sustained the production of this
issue, is my interest in improving the quality of personal computing. The
topics described in the October 1978 BYTE reader survey represent very
aggressive and ambitious goals; they will not be attainable without quality
tools and a clear understanding of computation. Reinforce your inquisitive
minds with an understanding of LISP and a personal LISP machine.

.end

additional  bibliography

Pirsig, R., Zen and the Art of Motorcycle Maintenance, Bantam Books, 1974

IEEE Transactions on Computers, Special Issue on LISP Machines, October 1979

Charniak, E., Riesbeck, C., McDermott, D., Artificial Intelligence Programming, Lawrence Erlbaum, New Jersey, 1979

Shapiro, S., Techniques of Artificial Intelligence, Van Nostrand, 1979

Friedman, D., The Little LISPer, SRA, 1974

Ray:

enclosed is:

1. additional "about the author stuff" to be added to the "in this issue"
material from page 1 of my article.

2. some additional bibliographic references to be added to the end of my
article.


3. a marked up version of my paper, basically fixing up the spots which
depended on papers not written at the time. I tried to make the
changes as small as possible.

The response to my editorial has been incredible! This issue will be a classic.

					john
P20

This parameter-passing mechanism is referred to as call by value:
we evaluate all of the parameters and pass those values to the function.
Notice that this is not how we evaluate conditional expressions;
in that case we evaluate only those parts of the conditional
which are necessary.
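
The distinction matters: if conditional expressions were ordinary
call-by-value functions, a definition in the style of the earlier
factorial function, say

    fact[n] <= [eq[n;0] → 1; T → times[n;fact[sub1[n]]]]

could never terminate, since the recursive arm would be evaluated even
when n is 0. (The names times and sub1 stand for assumed arithmetic
primitives in this sketch.)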

The next few paragraphs sketch the evaluator expressed in 1. through 4.;
we have left some details for the reader: the recursive functions
must be translated into S-lisp before they are presented to a machine, and
several constructors, selectors, and recognizers must be made explicit.

p22;line 4.

Such a discussion is too detailed for our purposes (see the papers
by Agre and Chailloux for more information).




p.22 line bottom-11
LISP presents other interesting problems for the implementor.
LISP constructors, like concat, cons, and list, create new objects.
There are two problems to be addressed: how to allocate
a finite amount of memory such that it is used effectively, and how to
recycle discarded memory after a computation has finished with it.
Some languages require the programmer to 
manage the allocation and deallocation of memory; LISP does not.
The creation and destruction of memory are totally transparent to the
user and to the language. This decision freed the user from details of
storage structures, and gave programmers the beginnings of true
abstract data structures. Of course nothing is free; the decision
made the LISP implementor's job more difficult. Allocation was handled
by parcelling out new objects from a "free space" area, but deallocation
required a technique for discovering discarded cells, followed by 
a mechanism for turning those cells back into free space.
The solution, originated by LISP, is called
garbage collection.  What is truly amazing is that the understanding
of the benefits of abstract data and the conception of the garbage collection
solution are both products of the first LISP implementation in the late
1950's!  Since that time much work has been done to refine both storage
allocation and deallocation techniques. The papers by  Chailloux,
and  Prini and Rudalics discuss these techniques in more detail.
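
To give the flavor of the technique, here is a minimal sketch of the
marking phase of such a collector: starting from the cells the
computation can still reach, mark everything reachable; whatever remains
unmarked is garbage and can be returned to free space. The primitives
marked and setmark, and the left-to-right sequencing form seq, are
assumptions of this sketch rather than features of any implementation
described in these papers.

    mark[x] <= [marked[x] → T;
                atom[x] → setmark[x];
                T → seq[setmark[x]; mark[first[x]]; mark[rest[x]]]]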

glossary